
    XML for Domain Viewpoints

    Within research institutions like CERN (the European Organization for Nuclear Research) there are often disparate databases (different in format, type and structure) that users need to access in a domain-specific manner. Users may want to access a simple unit of information without having to understand the details of the underlying schema, or they may want to access the same information from several different sources. It is neither desirable nor feasible to require users to have knowledge of these schemas. Instead, it would be advantageous if users could query these sources using their own domain models and abstractions of the data. This paper describes the basis of an XML (eXtensible Markup Language) framework, currently being developed at CERN, that provides this functionality. The goal of the first prototype was to explore the possibilities of XML for data integration and model management; it shows how XML can be used to integrate data sources. The framework is applicable not only to CERN data sources but to other environments as well.
    Comment: 9 pages, 6 figures; conference report from SCI'2001 Multiconference on Systemics & Informatics, Florida
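
    A minimal sketch of the idea, under invented schemas: two XML sources hold the same kind of information in different structures, and a thin "viewpoint" layer answers domain-level queries without exposing either schema. All element and attribute names below are hypothetical illustrations, not taken from the CERN prototype.

        # Hypothetical sketch: one domain-level view over two
        # differently structured XML sources. Element names are
        # invented, not taken from the CERN prototype.
        import xml.etree.ElementTree as ET

        SOURCE_A = """<staff>
          <person><name>A. Example</name><division>EP</division></person>
        </staff>"""

        SOURCE_B = """<records>
          <entry full_name="B. Sample" dept="IT"/>
        </records>"""

        def domain_view():
            """Return (name, division) pairs regardless of source schema."""
            people = []
            for person in ET.fromstring(SOURCE_A).iter("person"):
                people.append((person.findtext("name"),
                               person.findtext("division")))
            for entry in ET.fromstring(SOURCE_B).iter("entry"):
                people.append((entry.get("full_name"), entry.get("dept")))
            return people

        print(domain_view())  # same abstraction, two underlying schemas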

    Lack of brain serotonin affects feeding and differentiation of newborn cells in the adult hypothalamus

    Serotonin (5-HT) is a crucial signal in the neurogenic niche microenvironment. Dysregulation of the 5-HT system leads to mood disorders, but also to changes in appetite and metabolic rate. Tryptophan hydroxylase 2-deficient (Tph2(-/-)) mice depleted of brain 5-HT display alterations in these parameters, e.g., increased food consumption and modest impairment of sleep and respiration, accompanied by a less anxious phenotype. The newly discovered neural stem cell niche of the adult hypothalamus has potential implications for mediating stress responses and homeostatic functions. Using Tph2(-/-) mice, we explore stem cell behavior and cell genesis in the adult hypothalamus. Specifically, we examine precursor cell proliferation and survival in Tph2(-/-) mice at baseline and following a Western-type diet (WD). Our results show a decline in BrdU-positive cell numbers with aging in the absence of 5-HT. Furthermore, wild-type mice under dietary challenge show decreased cell proliferation and survival in the hypothalamic niche. In contrast, the increased high-calorie food intake of Tph2(-/-) mice is not accompanied by alterations in cell numbers. However, lack of brain 5-HT results in a shift of cell phenotypes that is abolished under WD. We conclude that precursor cells in the hypothalamus retain fate plasticity and respond to environmental challenges. This novel link between 5-HT signaling and cell genesis in the hypothalamus could be exploited as a therapeutic target in metabolic disease.

    Large Scale Job Management and Experience in Recent Data Challenges within the LHC CMS experiment

    From its conception the CMS job management system has been distributed to increase scalability and robustness. The system consists of several applications (called ProdAgents) which manage Monte Carlo, reconstruction and skimming jobs on collections of sites within different Grid environments (OSG, NorduGrid, LCG) and submission systems such as GlideIn, local batch, etc. Production of simulated data in CMS mainly takes place on so-called Tier-2 resources (small to medium-size computing centers), with approximately 50% of the CMS Tier-2 resources allocated to running simulation jobs, while the so-called Tier-1s (medium to large computing centers with high-capacity tape storage systems) will be used mainly for skimming and reconstructing detector data. During the last one and a half years the job management system has been adapted so that it can be configured to convert Data Acquisition (DAQ) / High Level Trigger (HLT) output from the CMS detector to the CMS data format and to manage the real-time data stream from the experiment. Simultaneously, the system has been upgraded to handle the increasing scale of CMS production and to adapt to the procedures used by its operators. In this paper we discuss the current high-level architecture of ProdAgent, the experience of using this system in computing challenges, feedback from these challenges, and future work, including migration to a set of core libraries to facilitate convergence between the different data management projects within CMS that deal with analysis, simulation, and initial reconstruction of real data. This migration is important, as it will decrease the code footprint of these projects and increase the maintainability of the code base.
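
    The distributed pattern described above can be sketched roughly as follows: independent agents pull production work from a queue and hand jobs to pluggable submission backends. The class and method names are hypothetical illustrations, not the actual ProdAgent API.

        # Hypothetical sketch of the agent pattern: independent agents
        # drain a work queue through interchangeable submission backends
        # (Grid, local batch, ...). Not the actual ProdAgent interfaces.
        import queue

        class LocalBatch:
            def submit(self, job: str) -> None:
                print(f"local batch: submitted {job}")

        class GridSubmitter:
            def submit(self, job: str) -> None:
                print(f"grid: submitted {job}")

        class Agent:
            """One agent; running many in parallel is what gives the
            system its scalability and robustness."""
            def __init__(self, backend):
                self.backend = backend

            def drain(self, work: queue.Queue) -> None:
                while not work.empty():
                    self.backend.submit(work.get())

        work = queue.Queue()
        for i in range(3):
            work.put(f"simulation-job-{i}")
        Agent(GridSubmitter()).drain(work)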

    The CMS Monte Carlo Production System: Development and Design

    The CMS production system has undergone a major architectural upgrade from its predecessor, with the goals of reducing the operational manpower needed and preparing for the large-scale production required by the CMS physics plan. The new production system is a tiered architecture that facilitates robust, distributed production request processing and takes advantage of the multiple Grid and farm resources available to the CMS experiment.
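
    As a rough illustration of a tiered request-processing pipeline (the stage names and steps are invented, not the CMS design), each production request could pass through ordered, independently deployable stages:

        # Hedged sketch of tiered request processing: a production
        # request flows through ordered stages that could each run on a
        # different node. Stage names are illustrative only.
        def validate(request: dict) -> dict:
            assert request["events"] > 0, "empty request"
            return request

        def split(request: dict) -> dict:
            # Break one large request into per-site chunks.
            request["chunks"] = [request["events"] // 2] * 2
            return request

        def dispatch(request: dict) -> dict:
            print(f"dispatching chunks {request['chunks']} to sites")
            return request

        request = {"dataset": "example", "events": 1000}
        for tier in (validate, split, dispatch):
            request = tier(request)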

    Dynamics of the bacterial gut microbiota in preterm and term infants after intravenous amoxicillin/ceftazidime treatment

    BACKGROUND: It is important to understand the consequences of pre-emptive antibiotic treatment in neonates, as disturbances in microbiota development during this key developmental time window might affect early and later life health outcomes. Despite increasing knowledge regarding the detrimental effect of antibiotics on the gut microbiota, limited research has focussed on antibiotic treatment duration. We determined the effect of short and long amoxicillin/ceftazidime administration on gut microbiota development during the immediate postnatal life of preterm and term infants. METHODS: Faecal samples were collected from 63 (pre)term infants at postnatal weeks one, two, three, four and six. Infants received either no (control), short-term (ST) or long-term (LT) postpartum amoxicillin/ceftazidime treatment. RESULTS: Compared to control infants, the microbiota of ST and LT infants contained a significantly higher abundance of Enterococcus during the first two postnatal weeks, at the expense of Bifidobacterium and Streptococcus. Short and long antibiotic treatment both allowed for microbiota restoration within the first six postnatal weeks; however, Enterococcus and Bifidobacterium abundances were affected in fewer ST than LT infants. CONCLUSIONS: Intravenous amoxicillin/ceftazidime administration affects intestinal microbiota composition by decreasing the relative abundance of Escherichia-Shigella and Streptococcus, while increasing the relative abundance of Enterococcus and Lactobacillus species during the first two postnatal weeks. Thriving of enterococci at the expense of bifidobacteria and streptococci should be considered as an aspect of the cost-benefit determination for antibiotic prescription.
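
    As a hedged illustration of the kind of quantity reported above, the "relative abundance" of a genus is simply its share of the reads in a sample; the counts below are invented, not study data.

        # Invented per-genus read counts for one faecal sample; the
        # relative abundance of a genus is its fraction of the total.
        counts = {"Enterococcus": 520, "Bifidobacterium": 130,
                  "Streptococcus": 90, "Escherichia-Shigella": 260}
        total = sum(counts.values())
        for genus, n in counts.items():
            print(f"{genus}: {n / total:.1%}")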

    Feasibility and reliability of PRISMA-Medical for specialty-based incident analysis

    Aims and objectives: In this study, the feasibility and reliability of the Prevention Recovery Information System for Monitoring and Analysis (PRISMA)-Medical method for systematic, specialty-based analysis and classification of incidents in the neonatal intensive care unit (NICU) were determined. Methods: After the introduction of a Neonatology System for Analysis and Feedback on Medical Events (NEOSAFE) in eight tertiary care NICUs and one paediatric surgical ICU, PRISMA-Medical was introduced to identify root causes of voluntarily reported incidents by multidisciplinary unit patient safety committees. Committee members were PRISMA-trained and familiar with the department and its processes. In this study, the results of PRISMA analysis of incidents reported during the first year are described. At t = 3 months and t = 12 months after introduction, test cases were performed to measure agreement at three levels of root cause classification using PRISMA-Medical. Inter-rater reliability was determined by calculating generalised κ values for each level of classification. Results: During the study period, 981 out of 1786 eligible incidents (55%) were analysed for underlying root causes. In total, 2313 root causes were identified and classified, giving an average of 2.4 root causes per incident. Although substantial agreement (κ 0.70–0.81) was reached at the main level of root cause classification of the test cases (discrimination between technical, organisational and human failure), and agreement among the committees at the second level (discrimination between skill-based, rule-based and knowledge-based errors) was acceptable (κ 0.53–0.59), discrimination between rule-based errors (the third level of classification) was more difficult to assess (κ 0.40–0.47). Conclusion: With some reservations, PRISMA-Medical proves to be both feasible and acceptably reliable for identifying and classifying multiple causes of medical events in the NICU.
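
    One common generalisation of the κ statistic to more than two raters is Fleiss' kappa; the sketch below, run on invented toy ratings, is offered only as an illustration of how such agreement values are computed, and may differ from the exact statistic used in the study.

        # Fleiss' kappa on toy data: rows are incidents, columns are the
        # main-level categories (technical, organisational, human), and
        # each cell counts the raters assigning that incident to that
        # category. Data are invented for illustration.
        def fleiss_kappa(table):
            N = len(table)            # incidents
            n = sum(table[0])         # raters per incident (constant)
            k = len(table[0])         # categories
            # Share of all assignments falling in each category.
            p = [sum(row[j] for row in table) / (N * n) for j in range(k)]
            # Per-incident observed agreement.
            P = [(sum(c * c for c in row) - n) / (n * (n - 1))
                 for row in table]
            P_bar = sum(P) / N
            P_e = sum(pj * pj for pj in p)   # chance agreement
            return (P_bar - P_e) / (1 - P_e)

        table = [[3, 0, 0], [0, 3, 0], [2, 1, 0], [0, 1, 2]]
        print(round(fleiss_kappa(table), 2))  # 0.47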

    Distributed Computing Grid Experiences in CMS

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access to the data from anywhere in the world, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.
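
    A quick back-of-envelope check of the challenge scale: a sustained 25 Hz reconstruction rate corresponds to roughly two million events per day.

        # 25 events/s sustained over one day:
        rate_hz = 25
        print(f"{rate_hz * 24 * 3600:,} events/day")  # 2,160,000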

    CMS Monte Carlo production in the WLCG computing Grid

    Monte Carlo production in CMS has received a major boost in performance and scale since the CHEP06 conference. The production system has been re-engineered to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented, together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. More efficient use of computing resources and better handling of the inherent Grid unreliability have increased the production scale by about an order of magnitude: the system is capable of running on the order of ten thousand jobs in parallel and yields more than two million events per day.
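
    Taking the two figures above together and assuming, purely for illustration, that output is spread evenly across jobs:

        # Rough consistency check; even spread across jobs is an
        # assumption, not a figure from the paper.
        parallel_jobs = 10_000
        events_per_day = 2_000_000
        print(events_per_day / parallel_jobs, "events per job per day")  # 200.0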